Inspired by the impressive success of contrastive learning (CL), a variety of graph augmentation strategies have been employed to learn node representations in a self-supervised manner. Existing methods construct contrastive samples by adding perturbations to the graph structure or node attributes. Although impressive results have been achieved, these methods are largely blind to a wealth of prior information that can be assumed: as the degree of perturbation applied to the original graph increases, 1) the similarity between the original graph and the generated augmented graph gradually decreases; 2) the discrimination among all nodes within each augmented view gradually increases. In this paper, we argue that both kinds of prior information can be incorporated (differently) into the contrastive learning paradigm through our general ranking framework. In particular, we first interpret CL as a special case of learning to rank (L2R), which inspires us to leverage the ranking order among positive augmented views. Meanwhile, we introduce a self-ranking paradigm to ensure that the discriminative information among different nodes is maintained and is less affected by perturbations of different degrees. Experimental results on various benchmark datasets verify the effectiveness of our algorithm compared with supervised and unsupervised models.
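The ranking order among positive augmented views can be illustrated with a minimal sketch (plain Python; the `ranking_loss` helper and the margin formulation are illustrative assumptions, not the authors' actual objective): views are ordered by perturbation degree, and a pairwise margin loss penalizes any strongly perturbed view that ends up more similar to the anchor than a weakly perturbed one.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def ranking_loss(anchor, views_by_perturbation, margin=0.1):
    """Pairwise margin loss: a view generated with a weaker perturbation
    should be more similar to the anchor than any more strongly
    perturbed view (views are listed from weakest to strongest)."""
    sims = [cosine(anchor, v) for v in views_by_perturbation]
    loss = 0.0
    for i in range(len(sims)):
        for j in range(i + 1, len(sims)):
            # sims[i] comes from a weaker perturbation than sims[j]
            loss += max(0.0, margin - (sims[i] - sims[j]))
    return loss
```

With correctly ordered views the hinge terms vanish; reversing the order (strong perturbation most similar) activates the penalty.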
Minimizing the prediction uncertainty on unlabeled data is a key factor for achieving good performance in semi-supervised learning (SSL). The prediction uncertainty is typically expressed as the entropy computed from the transformed probabilities in the output space. Most existing works distill low-entropy predictions either by accepting the determinate class (with the largest probability) as the true label or by suppressing subtle predictions (with smaller probabilities). Unarguably, these distillation strategies are usually heuristic and less informative for model training. From this discernment, this paper proposes a dual mechanism, named Adaptive Sharpening (ADS), which first applies a soft threshold to adaptively mask out the determinate and negligible predictions, and then seamlessly sharpens the informed predictions, distilling certain predictions together with the informed ones only. More importantly, we theoretically analyze the traits of ADS by comparing it with various distillation strategies. Numerous experiments verify that ADS significantly improves state-of-the-art SSL methods when plugged into them. Our proposed ADS forges a cornerstone for future distillation-based SSL research.
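A minimal sketch of the mask-then-sharpen idea (a deliberate simplification with hypothetical threshold and temperature values; the paper's actual soft-threshold mechanism differs): entries that are already determinate or negligible are masked out, and the remaining informed predictions are temperature-sharpened and renormalized.

```python
def adaptive_sharpen(probs, temp=0.5, low=0.1, high=0.9):
    """Simplified ADS-style distillation over a probability vector."""
    # Mask out negligible (< low) and already-determinate (> high) entries
    kept = [p if low <= p <= high else 0.0 for p in probs]
    if sum(kept) == 0.0:
        # Nothing informative to sharpen; fall back to the input
        return list(probs)
    # Temperature sharpening on the informed predictions, then renormalize
    sharp = [p ** (1.0 / temp) for p in kept]
    z = sum(sharp)
    return [p / z for p in sharp]
```

Sharpening with `temp < 1` concentrates mass on the strongest informed prediction, lowering the entropy of the distilled target.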
Relative attributes (RAs), which refer to preferences over two images regarding the strength of a specific attribute, can enable fine-grained image-to-image translation owing to their rich semantic information. However, existing work based on RAs fails to reconcile the goal of fine-grained translation with the goal of high-quality generation. We propose a new model, TRIP, to coordinate these two goals for high-quality fine-grained translation. In particular, we simultaneously train two modules: a generator that translates an input image into the desired image with smooth, subtle changes with respect to the attributes of interest; and a ranker that ranks the rival preferences consisting of the input image and the desired image. Rival preferences refer to an adversarial ranking process: (1) the ranker believes there is no difference between the desired image and the input image in terms of the desired attributes; (2) the generator fools the ranker into believing that the desired image changes the attributes over the input image as required. RAs over pairs of real images are introduced to guide the ranker to rank image pairs with respect to the attributes of interest only. With an effective ranker, the generator "wins" the adversarial game by producing high-quality images that exhibit the desired changes over the attributes compared with the input image. Experiments on two face image datasets and one shoe image dataset demonstrate that TRIP achieves state-of-the-art results in generating high-fidelity images that exhibit smooth changes over the attributes of interest.
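The rival-preference game can be sketched as a pair of least-squares objectives (an illustrative formulation with hypothetical scalar critic outputs, not TRIP's actual losses): the ranker pushes real RA pairs toward their annotated preference and (input, generated) pairs toward "no change", while the generator pushes the ranker's score on its pairs toward the requested attribute change.

```python
def ranker_loss(r_real_pair, r_fake_pair):
    """Ranker: a real RA pair should receive its annotated preference
    (target 1.0), while an (input, generated) pair should look like
    'no attribute difference' (target 0.0)."""
    return (r_real_pair - 1.0) ** 2 + r_fake_pair ** 2

def generator_loss(r_fake_pair, desired_change):
    """Generator: fool the ranker into believing the generated image
    changes the attribute by the requested amount."""
    return (r_fake_pair - desired_change) ** 2
```

At the equilibrium of this game, the ranker's score on generated pairs tracks the requested attribute change, which is what yields smooth, fine-grained control.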
This paper proposes the Differential-Critic Generative Adversarial Network (DiCGAN) to learn the distribution of user-desired data when only part, rather than the whole, of the dataset possesses the desired property. DiCGAN generates desired data that satisfies the user's expectations and can assist in designing biological products with desired properties. Existing approaches first select the desired samples and then train regular GANs on the selected samples to derive the user-desired data distribution. However, the selection of the desired data relies on global knowledge of, and supervision over, the entire dataset. DiCGAN introduces a differential critic that learns from pairwise preferences, which are local knowledge and can be defined on part of the training data. The critic is built by defining an additional ranking loss over the Wasserstein GAN critic. It endows the difference in critic values between each pair of samples with the user's preference and guides the generation toward the desired data rather than the whole data. To obtain a more efficient solution that ensures data quality, we further reformulate DiCGAN as a constrained optimization problem, based on which we theoretically prove the convergence of DiCGAN. Extensive experiments on diverse datasets with various applications demonstrate that DiCGAN achieves state-of-the-art performance in learning user-desired data distributions, especially in the presence of insufficient desired data and limited supervision.
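A minimal sketch of adding a pairwise ranking term to a WGAN-style critic loss (illustrative scalar critic values and a hypothetical margin of 1.0; not DiCGAN's exact formulation): the standard Wasserstein term separates real from generated samples, while the ranking term asks the critic to score the preferred sample of each user-annotated pair higher than the other.

```python
def dicgan_critic_loss(f_real, f_fake, pref_pairs, lam=1.0):
    """f_real / f_fake: critic values on real and generated samples.
    pref_pairs: list of (f_better, f_worse) critic-value pairs from
    user pairwise preferences, defined on part of the data."""
    # Standard WGAN critic term: separate real from generated samples
    wgan = -(sum(f_real) / len(f_real) - sum(f_fake) / len(f_fake))
    # Ranking term: preferred sample should score higher by a margin
    rank = sum(max(0.0, 1.0 - (fb - fw)) for fb, fw in pref_pairs)
    return wgan + lam * rank / max(len(pref_pairs), 1)
```

Because the ranking term only needs pairwise preferences, it can be computed from local annotations on a subset of the data, which is the key departure from selection-based approaches.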
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression problem, in line with the easy-to-hard law of the human learning process. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets. The experimental results indicate that the performance of PMT-IQA is superior to the comparison approaches, and that both the MS and PMT modules improve the model's performance.
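The progressive easy-to-hard idea can be sketched as a simple loss-weight schedule (hypothetical helper names and a linear schedule, assumed for illustration; the paper's PMT module may use a different transition): training starts by emphasizing an easier auxiliary task (e.g. coarse quality-level classification) and gradually shifts the weight toward the harder quality-score regression.

```python
def progressive_weights(step, total_steps):
    """Linear easy-to-hard schedule: returns the weights for the
    (easier classification task, harder regression task) at `step`."""
    lam = min(step / total_steps, 1.0)
    return (1.0 - lam), lam

def pmt_loss(cls_loss, reg_loss, step, total_steps):
    # Combined multi-task loss under the progressive schedule
    w_cls, w_reg = progressive_weights(step, total_steps)
    return w_cls * cls_loss + w_reg * reg_loss
```

Early in training the easy task dominates; by the end, the model optimizes the regression objective almost exclusively.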
In this paper, we study the problem of knowledge-intensive text-to-SQL, in which domain knowledge is necessary to parse expert questions into SQL queries over domain-specific tables. We formalize this scenario by building a new Chinese benchmark KnowSQL consisting of domain-specific questions covering various domains. We then address this problem by presenting formulaic knowledge, rather than by annotating additional data examples. More concretely, we construct a formulaic knowledge bank as a domain knowledge base and propose a framework (ReGrouP) to leverage this formulaic knowledge during parsing. Experiments using ReGrouP demonstrate a significant 28.2% improvement overall on KnowSQL.
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment in an untrimmed video given a sentence query. All existing works first utilize a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods have overlooked two indispensable issues: 1) Boundary bias: the annotated target segment generally refers to two specific frames as the corresponding start and end timestamps. The video downsampling process may lose these two frames and take adjacent irrelevant frames as new boundaries. 2) Reasoning bias: such incorrect new boundary frames also lead to reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames to enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationship among these frames and generate soft labels on the boundaries for more accurate frame-query reasoning. Such a mechanism is also able to supplement the absent consecutive visual semantics of the sampled sparse frames for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.
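One way to picture soft boundary labels (a hypothetical Gaussian scheme, assumed for illustration; the paper's reasoning strategy learns these labels rather than fixing them analytically): instead of a one-hot label on the annotated boundary frame, nearby sampled frames also receive partial supervision, so that if downsampling drops the true boundary frame, its neighbors still carry signal.

```python
import math

def soft_boundary_labels(num_frames, boundary_idx, sigma=1.0):
    """Soft label per sampled frame: a normalized Gaussian centred on
    the annotated boundary index, giving adjacent frames that may
    replace a dropped true boundary partial supervision."""
    raw = [math.exp(-((i - boundary_idx) ** 2) / (2 * sigma ** 2))
           for i in range(num_frames)]
    z = sum(raw)
    return [v / z for v in raw]
```

The labels peak at the annotated boundary and decay symmetrically, which is the property that mitigates the boundary bias introduced by sparse sampling.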
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, in addition to lidar point clouds and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
As an important variant of entity alignment (EA), multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) with multiple modalities such as images. However, current MMEA algorithms all adopt KG-level modality fusion strategies and ignore modality differences among individual entities, hurting robustness to potential noise in the modalities (e.g., unidentifiable images and relations). In this paper, we present MEAformer, a multi-modal entity alignment transformer approach for meta modality hybrid, which dynamically predicts the mutual correlation coefficients among modalities for instance-level feature fusion. A modal-aware hard entity replay strategy is also proposed to address vague entity details. Extensive experimental results show that our model not only achieves SOTA performance in multiple training scenarios, including supervised, unsupervised, iterative, and low-resource settings, but also has a limited number of parameters, efficient runtime, and good interpretability. Our code will be available soon.
Deep learning has been widely used for protein engineering. However, it is limited by the lack of sufficient experimental data to train an accurate model for predicting the functional fitness of high-order mutants. Here, we develop SESNet, a supervised deep-learning model to predict the fitness of protein mutants by leveraging both sequence and structure information and exploiting the attention mechanism. Our model integrates the local evolutionary context from homologous sequences, the global evolutionary context encoding rich semantics from the universal protein sequence space, and the structure information accounting for the microenvironment around each residue in a protein. We show that SESNet outperforms state-of-the-art models for predicting the sequence-function relationship on 26 deep mutational scanning datasets. More importantly, we propose a data augmentation strategy that leverages the data from unsupervised models to pre-train our model. After that, our model achieves strikingly high accuracy in predicting the fitness of protein mutants, especially for higher-order variants (> 4 mutation sites), when fine-tuned using only a small number of experimental mutation data points (< 50). The proposed strategy is of great practical value, as the required experimental effort, i.e., producing a few tens of experimental mutation data points for a given protein, is generally affordable for an ordinary biochemistry group and can be applied to almost any protein.